
    Probabilistic ToF and Stereo Data Fusion Based on Mixed Pixel Measurement Models

    This paper proposes a method for fusing data acquired by a ToF camera and a stereo pair, based on a model for ToF depth measurements that also accounts for depth discontinuity artifacts due to the mixed pixel effect. This model is exploited within both an ML and a MAP-MRF framework for ToF and stereo data fusion. The proposed MAP-MRF framework is characterized by site-dependent range values, an important feature since it can be used both to improve the accuracy and to decrease the computational complexity of standard MAP-MRF approaches. In order to optimize the site-dependent global cost function characteristic of the proposed MAP-MRF approach, this paper also introduces an extension of Loopy Belief Propagation which can be used in other contexts. Experimental data validate the proposed ToF measurement model and the effectiveness of the proposed fusion techniques.
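
    As a rough sketch of the site-dependent MAP-MRF idea described above, the Python fragment below evaluates a global cost in which each pixel draws its depth only from its own small candidate set. The Gaussian data terms, the truncated-linear smoothness term and all parameter names are illustrative assumptions, not the paper's exact measurement model.

    import numpy as np

    def data_cost(z, z_tof, z_stereo, s_tof=1.0, s_stereo=1.0):
        # Negative log-likelihood of depth z given the ToF and stereo
        # measurements, modeled here as independent Gaussians.
        return ((z - z_tof) / s_tof) ** 2 + ((z - z_stereo) / s_stereo) ** 2

    def energy(labels, candidates, z_tof, z_stereo, lam=0.1):
        # labels[i, j] indexes into candidates[i, j], the site-dependent
        # range values: each pixel searches only a small depth interval
        # around its own measurements instead of a global label set.
        h, w = labels.shape
        e = 0.0
        for i in range(h):
            for j in range(w):
                z = candidates[i, j][labels[i, j]]
                e += data_cost(z, z_tof[i, j], z_stereo[i, j])
                # truncated-linear smoothness over the 4-neighborhood
                for di, dj in ((0, 1), (1, 0)):
                    ni, nj = i + di, j + dj
                    if ni < h and nj < w:
                        zn = candidates[ni, nj][labels[ni, nj]]
                        e += lam * min(abs(z - zn), 1.0)
        return e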

    Acquisition and Processing of ToF and Stereo data

    Providing a computer with the capability to estimate the three-dimensional geometry of a scene is a fundamental problem in computer vision. A classical system adopted for solving this problem is the so-called stereo vision system (stereo system). Such a system consists of a pair of cameras and exploits the principle of triangulation to provide an estimate of the framed scene. In the last ten years, new devices based on the time-of-flight principle have been proposed to solve the same problem, namely matricial Time-of-Flight range cameras (ToF cameras). This thesis focuses on the analysis of the two systems (ToF and stereo cameras) from a theoretical and an experimental point of view. ToF cameras are introduced in Chapter 2 and stereo systems in Chapter 3. In particular, for the case of ToF cameras, a new formal model that describes the acquisition process is derived and presented. In order to understand the strengths and weaknesses of such different systems, a comparison methodology is introduced and explained in Chapter 4. From the analysis of ToF cameras and stereo systems it is possible to understand the complementarity of the two systems, and it is natural to expect that a synergic fusion of their data might improve the quality of the measurements performed by the two devices. In Chapter 5 a method for fusing ToF and stereo data based on a probabilistic approach is presented. In Chapter 6 a method that exploits color and three-dimensional geometry information for solving the classical problem of scene segmentation is explained.
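
    For a rectified stereo pair, the triangulation principle mentioned above reduces to depth being inversely proportional to disparity. The short sketch below illustrates this relation; the focal length and baseline are illustrative values, not parameters from the thesis.

    # depth from disparity for a rectified stereo pair: Z = f * b / d
    def depth_from_disparity(disparity_px, f_px=700.0, baseline_m=0.12):
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return f_px * baseline_m / disparity_px

    # Example: a 35-pixel disparity maps to 700 * 0.12 / 35 = 2.4 m.
    print(depth_from_disparity(35.0))  # 2.4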

    Stereo Vision and Scene Segmentation

    This chapter focuses on how segmentation robustness can be improved by the 3D scene geometry provided by stereo vision systems, as they are simpler and relatively cheaper than most current range cameras. In fact, two inexpensive cameras arranged in a rig are often enough to obtain good results. Another noteworthy characteristic motivating the choice of stereo systems is that they provide both the 3D geometry and the color information of the framed scene without requiring further hardware. Indeed, as will be seen in the following sections, 3D geometry extraction from a framed scene by a stereo system, also known as stereo reconstruction, may be eased and improved by scene segmentation, since the correspondence search can be restricted to the same segment in the left and right images.
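
    A minimal sketch of the idea closing the paragraph above: the disparity search for a left-image pixel is restricted to right-image candidates lying in the matching segment. The SSD block cost and the assumption that segment labels are consistent across the two views are illustrative simplifications, not the chapter's specific algorithm.

    import numpy as np

    def best_disparity(left, right, seg_l, seg_r, y, x, max_d=64, win=3):
        # left/right: grayscale images; seg_l/seg_r: per-pixel segment labels
        h, w = left.shape
        best, best_cost = 0, np.inf
        for d in range(max_d + 1):
            # skip candidates outside the image or in a different segment
            if x - d < 0 or seg_r[y, x - d] != seg_l[y, x]:
                continue
            a = left[max(0, y - win):y + win + 1, max(0, x - win):x + win + 1]
            b = right[max(0, y - win):y + win + 1, max(0, x - d - win):x - d + win + 1]
            if a.shape != b.shape:
                continue  # clipped windows near the borders
            cost = np.sum((a.astype(float) - b.astype(float)) ** 2)  # SSD
            if cost < best_cost:
                best, best_cost = d, cost
        return best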

    Fusion of Geometry and Color Information for Scene Segmentation

    Scene segmentation is a well-known problem in computer vision, traditionally tackled by exploiting only the color information from a single scene view. Recent hardware and software developments allow scene geometry to be estimated in real time and open the way for new scene segmentation approaches based on the fusion of color and depth data. This paper follows this rationale and proposes a novel segmentation scheme where multidimensional vectors are used to jointly represent color and depth data, and normalized cuts spectral clustering is applied to them in order to segment the scene. The critical issue of how to balance the two sources of information is solved by an automatic procedure based on an unsupervised metric for the segmentation quality. An extension of the proposed approach based on the exploitation of both images of a stereo vision system is also proposed. Different acquisition setups, such as time-of-flight cameras, the Microsoft Kinect device and stereo vision systems, have been used for the experimental validation. A comparison of the effectiveness of the different depth imaging systems for segmentation purposes is also presented. Experimental results show how the proposed algorithm outperforms scene segmentation algorithms based on geometry or color data alone, as well as other approaches that exploit both cues.
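
    To make the joint representation concrete, the sketch below stacks each pixel's color and 3D coordinates into a single 6D vector, with a weight balancing geometry against color, and clusters the vectors spectrally. scikit-learn's SpectralClustering stands in for the normalized-cuts step, and the fixed weight lam replaces the paper's automatic balancing procedure; both are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    def segment(color_lab, xyz, n_segments=5, lam=1.0):
        # color_lab: (h, w, 3) color image; xyz: (h, w, 3) 3D coordinates
        h, w, _ = color_lab.shape
        # one 6D vector [L, a, b, lam*x, lam*y, lam*z] per pixel
        feats = np.concatenate(
            [color_lab.reshape(-1, 3), lam * xyz.reshape(-1, 3)], axis=1)
        labels = SpectralClustering(
            n_clusters=n_segments, affinity="nearest_neighbors",
            n_neighbors=10, assign_labels="discretize").fit_predict(feats)
        return labels.reshape(h, w)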

    Real-Time image distortion correction: Analysis and evaluation of FPGA-compatible algorithms

    Image distortion correction is a critical preprocessing step for a variety of computer vision and image processing algorithms. Standard real-time software implementations are generally not suited for direct hardware porting, so appropriate versions need to be designed in order to obtain implementations deployable on FPGAs. In this paper, hardware-compatible techniques for image distortion correction are introduced and analyzed in detail. The considered solutions are compared in terms of output quality by using a geometrical-error-based approach, with particular emphasis on robustness with respect to increasing lens distortion. The amount of hardware resources required by each considered approach is also estimated.
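
    A sketch of the inverse-mapping scheme that underlies most correctors of this kind: for every pixel of the corrected output, the distorted source position is computed and sampled. The one-term radial (Brown-Conrady) model and the nearest-neighbor sampling, which keeps the loop close to what a hardware pipeline would do, are illustrative choices rather than the specific algorithms evaluated in the paper.

    import numpy as np

    def undistort(img, fx, fy, cx, cy, k1):
        h, w = img.shape[:2]
        out = np.zeros_like(img)
        for v in range(h):
            for u in range(w):
                # normalized coordinates of the ideal (undistorted) pixel
                x, y = (u - cx) / fx, (v - cy) / fy
                r2 = x * x + y * y
                # the forward radial model locates the source pixel in
                # the distorted input image
                xd, yd = x * (1 + k1 * r2), y * (1 + k1 * r2)
                us = int(round(xd * fx + cx))
                vs = int(round(yd * fy + cy))
                if 0 <= us < w and 0 <= vs < h:
                    out[v, u] = img[vs, us]  # nearest-neighbor sampling
        return out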

    A probabilistic approach to ToF and stereo data fusion

    Current 3D video applications require the availability of depth information, which can be acquired in real time by stereo vision systems and ToF cameras. In this paper, a heterogeneous acquisition system is considered, made of two high-resolution standard cameras (a stereo pair) and one ToF camera. The stereo system and the ToF camera must be properly calibrated together in order to operate jointly. Therefore, this work first introduces a generalized multi-camera calibration technique which exploits not only the luminance (color) information but also the depth information extracted by the ToF camera. A probabilistic algorithm is then derived in order to obtain high quality depth information from both the ToF camera and the stereo pair. Experimental results show that the proposed calibration algorithm leads to a very accurate calibration suitable for the fusion algorithm, which allows for precise extraction of depth information.
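
    As a pixel-wise illustration of the probabilistic flavor of such a fusion: if the ToF and stereo depth estimates are modeled as independent Gaussians, the fused maximum-likelihood depth is their precision-weighted average. The variances below are illustrative assumptions, not the paper's measurement model.

    def fuse(z_tof, var_tof, z_stereo, var_stereo):
        # precision-weighted average of two Gaussian depth estimates
        w_t, w_s = 1.0 / var_tof, 1.0 / var_stereo
        z = (w_t * z_tof + w_s * z_stereo) / (w_t + w_s)
        return z, 1.0 / (w_t + w_s)

    # Example: 2.00 m from ToF (sigma = 2 cm) fused with 2.10 m from
    # stereo (sigma = 8 cm) gives about 2.006 m, dominated by the more
    # precise ToF sample.
    print(fuse(2.00, 0.02 ** 2, 2.10, 0.08 ** 2))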